In computer architecture, Gustafson's law (or Gustafson–Barsis' law) gives the theoretical speedup in latency of the execution of a task ''at fixed execution time'' that can be expected of a system whose resources are improved. It is named after computer scientist John L. Gustafson and his colleague Edwin H. Barsis, and was presented in the article ''Reevaluating Amdahl's Law'' in 1988.〔John L. Gustafson, "Reevaluating Amdahl's Law", ''Communications of the ACM'' 31(5), 1988, pp. 532–533.〕 Gustafson's law can be formulated the following way:

: <math>S_\text{latency}(s) = 1 - p + sp,</math>

where
* ''S''<sub>latency</sub> is the theoretical speedup in latency of the execution of the whole task;
* ''s'' is the speedup in latency of the execution of the part of the task that benefits from the improvement of the resources of the system;
* ''p'' is the percentage of the execution workload of the whole task concerning the part that benefits from the improvement of the resources of the system ''before the improvement''.

Gustafson's law addresses the shortcomings of Amdahl's law, which is based on the assumption of a fixed problem size, that is, of an execution workload that does not change with respect to the improvement of the resources. Gustafson's law instead proposes that programmers tend to set the size of problems to fully exploit the computing power that becomes available as the resources improve. Therefore, if faster equipment is available, larger problems can be solved within the same time.

The impact of Gustafson's law was to shift research goals to select or reformulate problems so that solving a larger problem in the same amount of time would be possible. In a way the law redefines efficiency, due to the possibility that limitations imposed by the sequential part of a program may be countered by increasing the total amount of computation.
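For illustration, here is a minimal Python sketch (the function names are hypothetical, not from the article) that evaluates the formula and contrasts it with Amdahl's fixed-workload speedup for the same ''p'' and ''s'':

<syntaxhighlight lang="python">
def gustafson_speedup(p: float, s: float) -> float:
    """Speedup at fixed execution time, with the workload scaled by s."""
    return 1 - p + s * p

def amdahl_speedup(p: float, s: float) -> float:
    """Speedup at fixed workload (Amdahl's law), for comparison."""
    return 1 / ((1 - p) + p / s)

p, s = 0.95, 16  # 95% of the workload benefits; 16-fold resource improvement
print(f"Gustafson: {gustafson_speedup(p, s):.2f}x")  # 15.25x
print(f"Amdahl:    {amdahl_speedup(p, s):.2f}x")     # about 9.14x
</syntaxhighlight>

The gap between the two numbers reflects the two laws' assumptions: Amdahl's law holds the problem size fixed, while Gustafson's law lets the part of the workload that benefits from the resources grow with them.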
==Derivation==
A task executed by a system whose resources are improved compared to an initial similar system can be split up into two parts:
* a part that does not benefit from the improvement of the resources of the system;
* a part that benefits from the improvement of the resources of the system.

''Example.'' — A computer program that processes files from disk. A part of that program may scan the directory of the disk and create a list of files internally in memory. After that, another part of the program passes each file to a separate thread for processing. The part that scans the directory and creates the file list cannot be sped up on a parallel computer, but the part that processes the files can. (A sketch of such a program follows the derivation below.)

The execution workload of the whole task before the improvement of the resources of the system is denoted ''W''. It includes the execution workload of the part that does not benefit from the improvement of the resources and the execution workload of the one that benefits from it. The percentage of the execution workload of the whole task concerning the part that benefits from the improvement of the resources ''before the improvement'' is denoted ''p''. The one concerning the part that does not benefit from it is therefore 1 − ''p''. Then

: <math>W = (1 - p)W + pW.</math>

It is the execution of the part that benefits from the improvement of the resources that is sped up by the factor ''s'' after the improvement of the resources. Consequently, the execution workload of the part that does not benefit from it remains the same, while that of the part that benefits from it scales with ''s'':

: <math>pW \;\rightarrow\; spW.</math>

The theoretical execution workload ''W''(''s'') of the whole task after the improvement of the resources is then

: <math>W(s) = (1 - p)W + spW.</math>

Gustafson's law gives the theoretical speedup in latency of the execution of the whole task ''at fixed time T'', which yields

: <math>S_\text{latency}(s) = \frac{TW(s)}{TW} = \frac{W(s)}{W} = 1 - p + sp.</math>
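The following minimal Python sketch fleshes out the example above (the byte count stands in for real per-file work and is my own choice): the directory scan is the part that does not benefit from more resources, and the per-file processing is the part that does.

<syntaxhighlight lang="python">
import os
from concurrent.futures import ThreadPoolExecutor

def process_file(path: str) -> int:
    # Stand-in for real per-file work: count the bytes in the file.
    with open(path, "rb") as f:
        return len(f.read())

def run(directory: str, workers: int) -> int:
    # Serial part: scan the directory and build the file list in memory.
    # This part cannot be sped up by adding more workers.
    paths = [entry.path for entry in os.scandir(directory) if entry.is_file()]

    # Parallel part: pass each file to a separate thread for processing.
    # This part scales with the number of workers, i.e. with s.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(process_file, paths))

# With more workers, a larger directory fits in the same wall-clock time,
# which is the scaled-workload view that Gustafson's law takes.
print(run(".", workers=8))
</syntaxhighlight>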
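As a quick numerical check of the derivation, with illustrative values ''W'' = 100, ''p'' = 0.95 and ''s'' = 16:

: <math>W(16) = 0.05 \cdot 100 + 16 \cdot 0.95 \cdot 100 = 5 + 1520 = 1525,</math>

so the speedup at fixed time is <math>S_\text{latency}(16) = 1525/100 = 15.25</math>, in agreement with <math>1 - p + sp = 0.05 + 15.2 = 15.25</math>.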